[ML] track inference model feature usage per node #79752
Conversation
Pinging @elastic/ml-core (Team:ML)
```diff
@@ -406,13 +406,13 @@ void featureUsed(LicensedFeature feature) {
         usage.put(new FeatureUsage(feature, null), epochMillisProvider.getAsLong());
     }

-    void enableUsageTracking(LicensedFeature feature, String contextName) {
+    public void enableUsageTracking(LicensedFeature feature, String contextName) {
```
@rjernst I made these public for testing purposes; I need to check whether tracking is enabled/disabled for inference from a different package.
It's a shame to do this just so TrainedModelDeploymentTaskTests can verify that TrainedModelDeploymentTask calls start/stopTracking. Instead of statically importing MachineLearning.ML_MODEL_INFERENCE_FEATURE, you could pass the LicensedFeature in as a constructor parameter and mock a LicensedFeature in the tests, which can then be verified.
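A minimal, self-contained sketch of the injection-and-mock pattern this suggests. UsageTracker and DeploymentTask below are hypothetical stand-ins, not the real LicensedFeature or TrainedModelDeploymentTask; the Mockito-based test only illustrates how the start/stop calls could be verified once the feature is passed through the constructor.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

// Stand-in for LicensedFeature: just the two calls the test needs to observe.
interface UsageTracker {
    void startTracking(String contextName);
    void stopTracking(String contextName);
}

// Stand-in for TrainedModelDeploymentTask: the feature is injected, not statically imported.
class DeploymentTask {
    private final UsageTracker feature;
    private final String modelId;

    DeploymentTask(UsageTracker feature, String modelId) {
        this.feature = feature;   // constructor injection makes the dependency mockable
        this.modelId = modelId;
    }

    void start() {
        feature.startTracking(modelId);   // usage tracking begins with the deployment
    }

    void stop() {
        feature.stopTracking(modelId);    // and ends when the deployment is torn down
    }
}

class DeploymentTaskTests {
    void testTrackingIsStartedAndStopped() {
        UsageTracker feature = mock(UsageTracker.class);
        DeploymentTask task = new DeploymentTask(feature, "my-model");

        task.start();
        verify(feature).startTracking("my-model");

        task.stop();
        verify(feature).stopTracking("my-model");
    }
}
```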
Fixed. I am passing in the value now as @davidkyle suggests, so these changes have been removed.
Note for future reference: there is a test class called MockLicenseState that makes these package-private methods public for testing purposes, so there shouldn't ever be a need to make them public on XPackLicenseState.
Looks good!
I think it would be useful to track usage of DFA models and PyTorch models separately. I know we consider both to be inference and the code is structured that way, but for reporting it's nice to know which is used. Can you add a MachineLearning.ML_PYTORCH_INFERENCE_FEATURE field or similar, please?
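As a rough illustration only, such a field might sit alongside the existing inference feature. This sketch assumes a LicensedFeature.persistent(family, name, license level) factory like the one used for other ML features; the family string, feature name, and license level shown here are placeholders, not values from this PR.

```java
// Hypothetical sketch only; family, name, and license level are placeholders.
public static final LicensedFeature.Persistent ML_PYTORCH_INFERENCE_FEATURE = LicensedFeature.persistent(
    "machine-learning",              // placeholder feature family
    "pytorch-inference",             // placeholder feature name
    License.OperationMode.PLATINUM   // placeholder license level
);
```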
LGTM
* upstream/master: (209 commits)
  Enforce license expiration (elastic#79671)
  TSDB: Automatically add timestamp mapper (elastic#79136)
  [DOCS] `_id` is required for bulk API's `update` action (elastic#79774)
  EQL: Add optional fields and limit joining keys on non-null values only (elastic#79677)
  [DOCS] Document range enrich policy (elastic#79607)
  [DOCS] Fix typos in 8.0 security migration (elastic#79802)
  Allow listing older repositories (elastic#78244)
  [ML] track inference model feature usage per node (elastic#79752)
  Remove IncrementalClusterStateWriter & related code (elastic#79738)
  Reuse previous indices lookup when possible (elastic#79004)
  Reduce merging in PersistedClusterStateService (elastic#79793)
  SQL: Adjust JDBC docs to use milliseconds for timeouts (elastic#79628)
  Remove endpoint for freezing indices (elastic#78918)
  [ML] add timeout parameter for DELETE trained_models API (elastic#79739)
  [ML] wait for .ml-state-write alias to be readable (elastic#79731)
  [Docs] Update edgengram-tokenizer.asciidoc (elastic#79577)
  [Test][Transform] fix UpdateTransformActionRequestTests failure (elastic#79787)
  Limit CS Update Task Description Size (elastic#79443)
  Apply the reader wrapper on can_match source (elastic#78988)
  [DOCS] Adds new transform limitation item and a note to the tutorial (elastic#79479)
  ...

# Conflicts:
#	server/src/main/java/org/elasticsearch/index/IndexMode.java
#	server/src/test/java/org/elasticsearch/index/TimeSeriesModeTests.java
This adds feature usage tracking for deployed inference models. The models are tracked under the existing inference feature and contain context related to the model ID.
I decided to track the feature via the allocation task to keep the logic similar between allocation tasks and licensed persistent tasks.
closes: #76452
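For readers skimming the change, here is a hedged sketch of the pattern the description refers to, built around the start/stopTracking calls discussed in the review above and the per-model context in the description. The task class, its method names, and the wiring are stand-ins for illustration, not the PR's actual code.

```java
import org.elasticsearch.license.LicensedFeature;
import org.elasticsearch.license.XPackLicenseState;

// Hedged sketch: a deployment/allocation-style task starts feature usage tracking
// against the node's XPackLicenseState when the model is deployed, using the model ID
// as the context, and stops it on teardown. Class and method names are assumptions.
class InferenceDeploymentSketch {
    private final XPackLicenseState licenseState;
    private final LicensedFeature.Persistent inferenceFeature; // passed in, per the review above
    private final String modelId;

    InferenceDeploymentSketch(XPackLicenseState licenseState,
                              LicensedFeature.Persistent inferenceFeature,
                              String modelId) {
        this.licenseState = licenseState;
        this.inferenceFeature = inferenceFeature;
        this.modelId = modelId;
    }

    void onDeploymentStarted() {
        // the model ID is the context the usage is reported under on this node
        inferenceFeature.startTracking(licenseState, modelId);
    }

    void onDeploymentStopped() {
        // tracking ends when the deployment is torn down
        inferenceFeature.stopTracking(licenseState, modelId);
    }
}
```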